
    Evaluation of the OQuaRE framework for ontology quality

    The increasing importance of ontologies has resulted in the development of a large number of ontologies in both coordinated and non-coordinated efforts. The number and complexity of such ontologies make it hard for ontology and tool developers to select which ontologies to use and reuse. So far, there is no mechanism for making such decisions in an informed manner. Consequently, methods for evaluating ontology quality are required. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies. OQuaRE has been applied to identify the strengths and weaknesses of different ontologies, but so far the framework itself has not been evaluated. Therefore, in this paper we present an evaluation of OQuaRE, performed by an international panel of experts in ontology engineering. The results include the positive and negative aspects of the current version of OQuaRE, the completeness and utility of the quality metrics included in OQuaRE, and a comparison between the results of the manual evaluations done by the experts and those obtained by a software implementation of OQuaRE.

    Supporting the analysis of ontology evolution processes through the combination of static and dynamic scaling functions in OQuaRE

    BACKGROUND: The biomedical community has now developed a significant number of ontologies. The curation of biomedical ontologies is a complex task, and biomedical ontologies evolve rapidly, so new versions are regularly and frequently published in ontology repositories. This results in a high number of ontology versions over a short time span. Given this level of activity, ontology designers need support for the effective management of the evolution of biomedical ontologies, as the different changes may affect the engineering and quality of the ontology. This is why methods are needed that contribute to the analysis of the effects of changes and of ontology evolution. RESULTS: In this paper we approach this issue from the ontology quality perspective. In previous work we developed an ontology evaluation framework based on quantitative metrics, called OQuaRE. Here, OQuaRE is used as a core component in a method that enables the analysis of the different versions of biomedical ontologies using the quality dimensions included in OQuaRE. Moreover, we describe and use two scales for evaluating the changes between the versions of a given ontology. The first is the static scale used in OQuaRE; the second is a new, dynamic scale based on the observed values of the quality metrics over a corpus defined by all the versions of a given ontology (its life-cycle). In this work we explain how OQuaRE can be adapted for understanding the evolution of ontologies. Its use is illustrated with the ontology of bioinformatics operations, types of data, formats, and topics (EDAM). CONCLUSIONS: The two scales included in OQuaRE provide complementary information about the evolution of ontologies. The application of the static scale, the original OQuaRE scale, to the versions of the EDAM ontology reveals a design based on good ontological engineering principles. The application of the dynamic scale has enabled a more detailed analysis of the evolution of the ontology, measured through differences between versions. The statistics of change based on the OQuaRE quality scores make it possible to identify key versions where changes in the engineering of the ontology triggered a change from the OQuaRE quality perspective. In the case of EDAM, this study led us to identify that the fifth version of the ontology has the largest impact on its quality metrics when comparative analyses between pairs of consecutive versions are performed.
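The contrast between the two scales described above can be sketched as follows. This is an illustrative sketch only: the threshold values, the 1-5 score range, and the min/max normalisation used for the dynamic scale are assumptions for demonstration, not the published OQuaRE definitions.

```python
# Sketch: a static scale maps a raw metric value to a quality score via
# fixed thresholds, while a dynamic scale rescales the value relative to
# the range observed across all versions of one ontology (its life-cycle).
# All boundary values below are illustrative assumptions.

def static_score(value, boundaries=(0.2, 0.4, 0.6, 0.8)):
    """Map a raw metric value to a 1-5 score using fixed thresholds."""
    score = 1
    for b in boundaries:
        if value > b:
            score += 1
    return score

def dynamic_score(value, corpus):
    """Map a value to a 1-5 score relative to the min/max observed
    over the whole corpus of versions."""
    lo, hi = min(corpus), max(corpus)
    if hi == lo:
        return 3  # no spread: every version looks the same
    position = (value - lo) / (hi - lo)  # 0.0 .. 1.0 within the life-cycle
    return 1 + round(position * 4)

# One hypothetical metric observed over six consecutive versions:
versions = [0.41, 0.42, 0.43, 0.44, 0.70, 0.71]

static = [static_score(v) for v in versions]
dynamic = [dynamic_score(v, versions) for v in versions]
```

With these toy numbers the static scores barely move (3, 3, 3, 3, 4, 4), while the dynamic scores (1, 1, 1, 1, 5, 5) expose the jump at the fifth version, which is the kind of key-version detection the abstract describes.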

    Significant effect of training in some topics.

    <p>22 subcharacteristics presented a significant effect of the GoodOD-based training for some topics (PRO, IMM, CLO, CME, INF, SPA).</p>

    OQuaRE Metrics.

    <p>Definition of the metrics used in OQuaRE. In the definition of the metrics, "number" refers to the number of assertions in the ontology, not to the number of entities defined in the ontology.</p>

    Factor loadings of topics in Principal Component Analysis.

    <p>Untrained case (left) and trained case (right). With the two most important components, more than 98% of the total variance was explained for untrained students, and more than 92% for trained ones.</p>

    Significance levels of testing difference of means.

    <p>Differences between the means of the untrained and trained groups () and mean distances to the gold standard of the trained and untrained groups []. The character * indicates the significance level of the differences: * significant, ** very significant, and *** highly significant.</p>

    No significant effect of training in any topic.

    <p>Seven OQuaRE subcharacteristics presented no significant effect of the training for any topic (PRO, IMM, CLO, CME, INF, SPA), and their mean values were similar for students and the gold standard.</p>